Asymptotic Convergence of the Steepest Descent Method for the Exponential Penalty in Linear Programming

Author

  • R. Cominetti
Abstract

We study the trajectories of the differential equation u̇(t) = −∇ₓf(u(t), r(t)), u(t0) = u0, where f(x, r) is the exponential penalty function associated with the linear program min{c′x : Ax ≤ b} and r(t) decreases to 0 as t goes to ∞. We show that for each initial condition (t0, u0) the solution u(t) is defined on the whole interval [t0, ∞) and, under suitable hypotheses on the rate of decrease of r(t), we establish the convergence of u(t) towards an optimal solution u∞ of the linear program. In particular, we find sufficient conditions for u∞ to coincide with the limit of the unique minimizer x(r) of f(·, r).
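To make the dynamics concrete, here is a minimal numerical sketch (not from the paper): an explicit Euler integration of u̇(t) = −∇ₓf(u(t), r(t)) for a small assumed example LP, with the exponential penalty f(x, r) = c′x + r Σᵢ exp((Aᵢx − bᵢ)/r) and an assumed decrease schedule r(t) = 1/(1 + t); the paper's hypotheses constrain admissible schedules, and this particular choice is illustrative only.

```python
import numpy as np

# Hypothetical example LP: min c'x s.t. Ax <= b  (data assumed for illustration)
# Constraints: x1 >= 0, x2 >= 0, x1 + x2 <= 4; optimum is (0, 0).
c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [1.0, 1.0]])
b = np.array([0.0, 0.0, 4.0])

def grad_f(x, r):
    """Gradient of the exponential penalty f(x, r) = c'x + r * sum_i exp((A_i x - b_i)/r)."""
    w = np.exp((A @ x - b) / r)          # penalty weights, one per constraint
    return c + A.T @ w

# Explicit Euler on u'(t) = -grad_x f(u(t), r(t)), with r(t) decreasing to 0.
u = np.array([2.0, 2.0])                 # initial condition u0
t, dt = 0.0, 1e-3
for _ in range(100_000):
    r = 1.0 / (1.0 + t)                  # assumed schedule; must satisfy the paper's rate hypotheses
    u = u - dt * grad_f(u, r)
    t += dt

print(u)  # u is now near the LP optimum (0, 0)
```

As r(t) shrinks, the penalty trajectory tracks the minimizer x(r) of f(·, r), which drifts toward an optimal vertex of the LP; this mirrors the convergence statement in the abstract, though a fixed-step Euler scheme eventually becomes unstable if r(t) decreases much faster than the step size allows.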


Related articles

On the convergence speed of artificial neural networks in the solving of linear systems

Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper examines the effect of diverse learning methods on the speed of convergence of neural networks. To this end, we first introduce a perceptron method based on artificial neural networks which has been applied for solving a non-singula...


A Dynamical System Associated with Newton’s Method for Parametric Approximations of Convex Minimization Problems∗

We study the existence and asymptotic convergence when t → +∞ of the trajectories generated by ∇²f(u(t), ε(t)) u̇(t) + ε̇(t) (∂²f/∂ε∂x)(u(t), ε(t)) + ∇f(u(t), ε(t)) = 0, where {f(·, ε)}_{ε>0} is a parametric family of convex functions which approximates a given convex function f we want to minimize, and ε(t) is a parametrization such that ε(t) → 0 when t → +∞. This method is obtained from the following variati...


Superlinearly convergent exact penalty projected structured Hessian updating schemes for constrained nonlinear least squares: asymptotic analysis

We present a structured algorithm for solving constrained nonlinear least squares problems, and establish its local two-step Q-superlinear convergence. The approach is based on an adaptive structured scheme due to Mahdavi-Amiri and Bartels of the exact penalty method of Coleman and Conn for nonlinearly constrained optimization problems. The structured adaptation also makes use of the ideas of N...


Using an Efficient Penalty Method for Solving Linear Least Square Problem with Nonlinear Constraints

In this paper, we use a penalty method for solving the linear least squares problem with nonlinear constraints. Each iteration of a penalty method for this problem requires the calculation of a projected Hessian matrix. Since the objective function is a linear least squares, the projected Hessian matrix of the penalty function consists of two parts; the exact amount of a part of i...


Effects of Probability Function on the Performance of Stochastic Programming

Stochastic programming is a valuable optimization tool used when some or all of the design parameters of an optimization problem are defined by stochastic variables rather than by deterministic quantities. Depending on the nature of the equations involved in the problem, a stochastic optimization problem is called a stochastic linear or nonlinear programming problem. In this paper, a stochasti...



Journal title:

Volume   Issue

Pages  -

Publication date: 1995